
    Generation and Rendering of Interactive Ground Vegetation for Real-Time Testing and Validation of Computer Vision Algorithms

    During the development of new computer vision algorithms, testing and evaluation in real outdoor environments is time-consuming and often difficult to realize, so artificial testing environments are a flexible and cost-efficient alternative. The development of new techniques for simulating natural, dynamic environments is therefore essential for real-time virtual reality applications, commonly known as Virtual Testbeds. Since the first basic use of Virtual Testbeds several years ago, the image quality of virtual environments has come close to photorealism, even in real time, thanks to new rendering approaches and the increasing processing power of current graphics hardware. As a result, Virtual Testbeds can now be applied in areas such as computer vision that rely strongly on realistic scene representations. Realistic rendering of natural outdoor scenes has become increasingly important in many application areas, but computer-simulated scenes often differ considerably from real-world environments, especially regarding interactive ground vegetation. In this article, we introduce a novel ground vegetation rendering approach that generates large scenes with realistic appearance and excellent performance. Our approach features wind animation as well as object-to-grass interaction, and delivers realistically appearing grass and shrubs at all distances and from all viewing angles. This greatly improves immersion and acceptance, especially in virtual training applications. The rendered results also fulfill important requirements of the computer vision aspect, such as a plausible geometric representation of the vegetation and its consistency during the entire simulation. Feature detection and matching algorithms are applied to our approach in localization scenarios of mobile robots in natural outdoor environments. We show how the quality of computer vision algorithms is influenced by highly detailed, dynamic environments, as observed in unstructured, real-world outdoor scenes with wind and object-to-vegetation interaction.
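The wind animation and object-to-grass interaction described in the abstract can be illustrated with a minimal per-blade displacement sketch: each grass blade tip is offset by a phase-shifted sinusoidal wind term plus a radial push-away term near an object. All function names and parameters below are illustrative assumptions, not the authors' actual implementation.

```python
import math

def blade_tip_offset(blade_pos, t, wind_dir=(1.0, 0.0), wind_strength=0.3,
                     obj_pos=None, obj_radius=1.0):
    """Illustrative 2D ground-plane offset of a grass blade tip (sketch only).

    blade_pos : (x, y) blade root position
    t         : simulation time in seconds
    wind_dir  : unit vector giving the wind direction
    obj_pos   : optional (x, y) position of an interacting object
    Returns (dx, dy), the displacement of the blade tip.
    """
    x, y = blade_pos
    # Sinusoidal sway, phase-shifted per blade so the field does not
    # move in lockstep.
    phase = 0.7 * x + 1.3 * y
    sway = wind_strength * math.sin(2.0 * t + phase)
    dx, dy = wind_dir[0] * sway, wind_dir[1] * sway

    # Object-to-grass interaction: push blades radially away from a
    # nearby object, with full strength at its centre and none at the rim.
    if obj_pos is not None:
        ox, oy = obj_pos
        dist = math.hypot(x - ox, y - oy)
        if 0.0 < dist < obj_radius:
            push = (obj_radius - dist) / obj_radius
            dx += push * (x - ox) / dist
            dy += push * (y - oy) / dist
    return dx, dy
```

In a real renderer this displacement would be evaluated per vertex in a shader; the sketch only shows the two additive terms (wind sway, object push-away) that the abstract names.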

    Simulationsgestützte Landmarkendetektion, Lokalisierung und Modellgenerierung für mobile Systeme (Simulation-Supported Landmark Detection, Localization, and Model Generation for Mobile Systems)

    For the meaningful interaction of mobile systems with their environment, a geometric and semantic detection and description of that environment is fundamental. The core of this dissertation is a concept for model-based environment detection coupled with object-based localization. The goal is to describe the environment in semantic environment models and, in conjunction with a simulation system, to represent it as a comprehensive semantic world model. The concept comprises three main components: environment detection, localization, and environment modeling. The recognition of surrounding objects and the assignment of suitable semantics is presented through the development and implementation of sensor data processing algorithms for different application scenarios. The implemented landmark-based localization makes it possible to locate the detected objects in a common frame of reference, so that additional data sources can subsequently be linked. The broad spectrum of applications ranges from simple mobile robotics scenarios to fully autonomous automobiles participating in road traffic. As the field of application grows more complex, the requirements for environment detection and description increase as well. In addition, the work focuses on the concept of situational awareness, starting with the observation of data from individual sensors, through the semantic interpretation of the measurements, up to holistic, application-spanning modeling in the simulation system. The present work extends the currently predominant notion of primarily sensory environment detection and modeling by the essential aspects of semantic recognition, description, and management. Semantic modeling, similar to human perception of the environment, enables the implementation of complex applications that "understand" their environment: they can capture, classify, and, where necessary, abstract complex relationships, and are thus able to interact safely and reasonably. The introduction of a formal and modular concept that interconnects the three domains creates a uniform and comprehensive solution for various applications. This work lays the foundation for further developments in the field of collectively usable, multi-application world models, and the realized applications demonstrate the potential of this research.
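The landmark-based localization the abstract describes can be illustrated with a textbook trilateration sketch: given range measurements to landmarks at known positions, a 2D position is recovered by linearizing the circle equations against the first landmark and solving the normal equations. This is a generic formulation assumed here for illustration, not the dissertation's actual algorithm.

```python
def locate(landmarks, ranges):
    """Estimate a 2D position from range measurements to >= 3 landmarks
    with known positions (illustrative least-squares trilateration).

    Each measurement gives (x - xi)^2 + (y - yi)^2 = ri^2. Subtracting the
    first equation removes the quadratic terms and yields a linear system
    A p = b, solved here via the normal equations and Cramer's rule.
    """
    (x0, y0), r0 = landmarks[0], ranges[0]
    A, b = [], []
    for (xi, yi), ri in zip(landmarks[1:], ranges[1:]):
        A.append((2.0 * (xi - x0), 2.0 * (yi - y0)))
        b.append(r0**2 - ri**2 + xi**2 - x0**2 + yi**2 - y0**2)

    # Normal equations (A^T A) p = A^T b for the 2x2 case.
    s11 = sum(a[0] * a[0] for a in A)
    s12 = sum(a[0] * a[1] for a in A)
    s22 = sum(a[1] * a[1] for a in A)
    t1 = sum(a[0] * bi for a, bi in zip(A, b))
    t2 = sum(a[1] * bi for a, bi in zip(A, b))
    det = s11 * s22 - s12 * s12
    return ((s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det)
```

With noisy real-world ranges the same least-squares structure applies, but the estimate becomes approximate and is typically fused over time (e.g. in a Kalman filter), which is where a common frame of reference for the detected landmarks becomes essential.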

    Semantic Environment Perception, Localization and Mapping
